Estimating the Entropy of Binary Time Series: Methodology, Some Theory and a Simulation Study

Authors

  • Yun Gao
  • Ioannis Kontoyiannis
  • Elie Bienenstock
Abstract

Partly motivated by entropy-estimation problems in neuroscience, we present a detailed and extensive comparison between some of the most popular and effective entropy estimation methods used in practice: the plug-in method, four different estimators based on the Lempel-Ziv (LZ) family of data compression algorithms, an estimator based on the Context-Tree Weighting (CTW) method, and the renewal entropy estimator. METHODOLOGY: Three new entropy estimators are introduced: two new LZ-based estimators, and the “renewal entropy estimator,” which is tailored to data generated by a binary renewal process. For two of the four LZ-based estimators, a bootstrap procedure is described for evaluating their standard error, and a practical rule of thumb is heuristically derived for selecting the values of their parameters in practice. THEORY: We prove that, unlike their earlier versions, the two new LZ-based estimators are universally consistent; that is, they converge to the entropy rate for every finite-valued, stationary and ergodic process. An effective method is derived for the accurate approximation of the entropy rate of a finite-state hidden Markov model (HMM) with known distribution. Heuristic calculations are presented and approximate formulas are derived for evaluating the bias and the standard error of each estimator. SIMULATION: All estimators are applied to a wide range of data generated by numerous different processes with varying degrees of dependence and memory. The main conclusions drawn from these experiments include: (i) For all estimators considered, the main source of error is the bias. (ii) The CTW method is repeatedly and consistently seen to provide the most accurate results. (iii) The performance of the LZ-based estimators is often comparable to that of the plug-in method.
(iv) The main drawback of the plug-in method is its computational inefficiency; with small word-lengths it fails to detect longer-range structure in the data, and with longer word-lengths the empirical distribution is severely undersampled, leading to large biases.
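To make the two families of estimators compared above concrete, here is a minimal Python sketch of (a) the plug-in estimator and (b) one common increasing-window LZ match-length variant. The function names, the overlapping-word convention, and the particular `log2(i + 1)` normalization are our own illustrative assumptions, not the paper's exact definitions, which include several LZ variants.

```python
from collections import Counter
import math

def plugin_entropy_rate(bits, w):
    """Plug-in estimator (a sketch): empirical entropy of the
    overlapping length-w words in the data, normalized per symbol."""
    n = len(bits) - w + 1
    counts = Counter(tuple(bits[i:i + w]) for i in range(n))
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return H / w

def lz_entropy_rate(bits):
    """Increasing-window LZ match-length estimator (one common variant):
    Lam_i is the length of the shortest prefix of bits[i:] that does NOT
    occur as a substring of the past bits[:i]; the entropy-rate estimate
    is the reciprocal of the average of Lam_i / log2(i + 1)."""
    n = len(bits)
    s = ''.join(map(str, bits))
    total = 0.0
    for i in range(1, n):
        # extend the match into the past as far as possible
        L = 0
        while i + L < n and s[i:i + L + 1] in s[:i]:
            L += 1
        total += (L + 1) / math.log2(i + 1)
    return (n - 1) / total
```

On highly regular data (e.g. a constant sequence) the match lengths grow quickly and the LZ estimate drops toward zero, while the plug-in estimate illustrates the trade-off in point (iv): small `w` misses long-range structure, and large `w` undersamples the empirical word distribution.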

Similar Articles

Finite Sample Properties of Quantile Interrupted Time Series Analysis: A Simulation Study

Interrupted Time Series (ITS) analysis represents a powerful quasi-experimental design in which a discontinuity is enforced at a specific intervention point in a time series, and separate regression functions are fitted before and after the intervention point. Segmented linear/quantile regression can be used in ITS designs to isolate intervention effects by estimating the sudden/level change (...

Estimating the Change Point of Binary Profiles with a Linear Trend Disturbance (Quality Engineering Conference Paper)

Identifying the real time of a change in a process when an out-of-control signal is present is significant: it may reduce the costs of defective products as well as the time spent exploring and fixing the cause of defects. Another popular topic in Statistical Process Control (SPC) is profile monitoring, where knowing the distribution of one or more quality characteristics may not be appropr...

Improving Drum-Buffer-Rope material flow management with attention to second bottlenecks and free goods in a job shop environment

Drum–Buffer–Rope is a theory of constraints production planning methodology that operates by developing a schedule for the system’s first bottleneck. The first bottleneck is the bottleneck with the highest utilization. In the theory of constraints, any job that is not processed at the first bottleneck is referred to as a free good. Free goods do not use capacity at the first bottleneck, so very...

Modified Maximum Likelihood Estimation in First-Order Autoregressive Moving Average Models with some Non-Normal Residuals

When modeling time series data using autoregressive-moving average processes, it is a common practice to presume that the residuals are normally distributed. However, sometimes we encounter non-normal residuals and asymmetry of data marginal distribution. Despite widespread use of pure autoregressive processes for modeling non-normal time series, the autoregressive-moving average models have le...

Extension of Logic Regression to Longitudinal Data: Transition Logic Regression

Logic regression is a generalized regression and classification method that is able to form Boolean combinations of the original binary variables as new predictive variables. Logic regression was introduced for case-control or cohort studies with independent observations. Although correlated observations occur in various studies for different reasons, logic regression has not been studi...

Journal:
  • Entropy

Volume 10, Issue -

Pages -

Publication year: 2008